Attribution data (feature 36/37) #1044


Open · wants to merge 8 commits into master from fat-errors

Conversation

joostjager
Collaborator

@joostjager joostjager commented Nov 21, 2022

Failure attribution is important to properly penalize nodes after a payment failure occurs. The goal of the penalty is to give the next attempt a better chance at succeeding. In the happy failure flow, the sender is able to determine the origin of the failure and penalizes a single node or pair of nodes.

Unfortunately it is possible for nodes on the route to hide themselves. If they return random data as the failure message, the sender won't know where the failure happened.

This PR proposes a new failure message format that lets each node commit to the failure message. If one of the nodes corrupts the failure message, the sender will be able to identify that node.
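The scheme's intuition can be sketched with a toy model (hypothetical helper names and keys; this is not the actual onion construction, which derives per-hop keys from shared secrets and processes the packet layer by layer): every hop adds an HMAC commitment over the failure payload using a key it shares with the sender, and the sender blames the first hop whose commitment fails to verify.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// nodeHMAC is a hop's commitment to the failure message, keyed by a
// secret shared with the sender (toy model, not the wire format).
func nodeHMAC(key, message []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(message)
	return mac.Sum(nil)
}

// firstInvalidHop returns the index of the first hop whose commitment
// fails to verify, or -1 if all commitments are valid.
func firstInvalidHop(keys [][]byte, message []byte, commitments [][]byte) int {
	for i, key := range keys {
		if !hmac.Equal(commitments[i], nodeHMAC(key, message)) {
			return i
		}
	}
	return -1
}

func main() {
	keys := [][]byte{[]byte("hop0"), []byte("hop1"), []byte("hop2")}
	msg := []byte("failure message")
	commitments := make([][]byte, len(keys))
	for i, k := range keys {
		commitments[i] = nodeHMAC(k, msg)
	}
	// Hop 1 corrupts its commitment; the sender can now identify it.
	commitments[1][0] ^= 0xff
	fmt.Println(firstInvalidHop(keys, msg, commitments)) // prints 1
}
```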

For more information, see https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-October/003723.html.

Furthermore, the htlc fail and fulfill flows are extended to convey self-reported htlc hold times to the sender.

Fail flow implementations

LND implementation: lightningnetwork/lnd#9888

LDK implementation: lightningdevkit/rust-lightning#2256

Eclair implementation: ACINQ/eclair#3065

CLN implementation: ElementsProject/lightning#8291

Fulfill flow implementations

LDK implementation: lightningdevkit/rust-lightning#3801

@thomash-acinq

I've started implementing it in eclair. Do you have some test vectors so we can check that we are compatible?
The design seems good to me, but as I've said previously, I think keeping hop payloads and hmacs for only 8 nodes (instead of 27) is enough for almost all use cases and would give us huge size savings.

@joostjager
Collaborator Author

joostjager commented Dec 6, 2022

I don't have test vectors yet, but I can produce them. Will add them to this PR when ready.

Capping the max hops at a lower number is fine with me, but do you have a scenario in mind where this would really make the difference? Or is it more generally that everything above 8 is wasteful?
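For a rough sense of the numbers (a sketch: it assumes one hmac per node/coverage pair, i.e. a triangular n*(n+1)/2 hmac set for n max hops, full 32-byte HMACs, and an illustrative 8-byte per-hop payload — the payload length is an assumption, not taken from the PR):

```go
package main

import "fmt"

// attributionOverhead estimates the failure-message overhead for a given
// maximum hop count: one fixed-size payload per hop plus a triangular
// n*(n+1)/2 set of hmacs.
func attributionOverhead(maxHops, payloadLen, hmacLen int) int {
	numHMACs := maxHops * (maxHops + 1) / 2
	return maxHops*payloadLen + numHMACs*hmacLen
}

func main() {
	fmt.Println(attributionOverhead(27, 8, 32)) // 27*8 + 378*32 = 12312 bytes
	fmt.Println(attributionOverhead(8, 8, 32))  // 8*8 + 36*32 = 1216 bytes
}
```

Under these assumptions, dropping from 27 to 8 max hops shrinks the overhead by roughly a factor of ten, dominated by the quadratic hmac count.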

@joostjager joostjager force-pushed the fat-errors branch 2 times, most recently from 4b48481 to 24b10d5 Compare December 6, 2022 16:52
@joostjager
Collaborator Author

@thomash-acinq added a happy fat error test vector.

09-features.md Outdated
@@ -41,6 +41,7 @@ The Context column decodes as follows:
| 20/21 | `option_anchor_outputs` | Anchor outputs | IN | `option_static_remotekey` | [BOLT #3](03-transactions.md) |
| 22/23 | `option_anchors_zero_fee_htlc_tx` | Anchor commitment type with zero fee HTLC transactions | IN | `option_static_remotekey` | [BOLT #3][bolt03-htlc-tx], [lightning-dev][ml-sighash-single-harmful]|
| 26/27 | `option_shutdown_anysegwit` | Future segwit versions allowed in `shutdown` | IN | | [BOLT #2][bolt02-shutdown] |
| 28/29 | `option_fat_error` | Can generate/relay fat errors in `update_fail_htlc` | IN | | [BOLT #4][bolt04-fat-errors] |
Collaborator Author

I think this big gap in the bits has emerged because of tentative spec changes that may or may not make it in. Not sure why that is necessary. I thought that for unofficial extensions, the custom range is supposed to be used?

I can see that with unofficial features deployed in the wild, it is easier to keep the same bit when something becomes official. But I'm not sure that is worth creating the gap here. An alternative is to deploy unofficial features in the custom range first, and then later recognize both the official and the unofficial bit. Slightly more code, but the feature list remains clean.

@joostjager
Collaborator Author

Added fat error signaling to the PR.

@thomash-acinq

I've spent a lot of time trying to make the test vector pass and I've finally found what was wrong:
In the spec you write that the hmac covers

  • failure_len, failuremsg, pad_len and pad.

  • The first y+1 payloads in payloads. For example, hmac_0_2 would cover all three payloads.

  • y downstream hmacs that correspond to downstream node positions relative to x. For example, hmac_0_2 would cover hmac_1_1 and hmac_2_0.

implying that we need to concatenate them in that order. But in your code you follow a different order:

// Include payloads including our own.
_, _ = hash.Write(payloads[:(NumMaxHops-position)*payloadLen])

// Include downstream hmacs.
var hmacsIdx = position + NumMaxHops
for j := 0; j < NumMaxHops-position-1; j++ {
	_, _ = hash.Write(
		hmacs[hmacsIdx*sha256.Size : (hmacsIdx+1)*sha256.Size],
	)

	hmacsIdx += NumMaxHops - j - 1
}

// Include message.
_, _ = hash.Write(message)

I think the order message + hop payloads + hmacs is more intuitive as it matches the order of the fields in the packet.

@joostjager
Collaborator Author

Oh great catch! Will produce a new vector.

@joostjager
Collaborator Author

@thomash-acinq updated vector

@joostjager
Collaborator Author

Updated LND implementation with sender-picked fat error structure parameters: lightningnetwork/lnd#7139

@t-bast
Collaborator

t-bast commented May 20, 2025

can't senders opt-in to privacy-preserving random delays via the forward onion?

That could be an interesting trade-off. The sender would include a TLV in the onion to ask relaying nodes to add a random delay? We'd go from delays/batching decided by the intermediate nodes to something that is driven by the senders instead. At first glance, I think it would work.

The downside would be that it's off by default (because most app developers will care more about UX than privacy), while I believe that Matt would like basic privacy features to be on by default at the network level.

I personally think using 100ms precision would be a good trade-off: batch/delays of this order of magnitude still allow for sub-second payment latency for paths that aren't too long, which I believe is good enough in terms of payment experience while offering some privacy if done at the network level.

I'm more in favor of always having 100ms precision (instead of 1ms precision), but if there is a larger consensus for 1ms precision, I'd be ok with it! Not sure how to move forward though: should we just have a small survey among implementers?

@tnull
Contributor

tnull commented May 20, 2025

Regarding hold time reporting, selection pressure in pathfinding, and its potential negative effect on privacy: can't senders opt-in to privacy-preserving random delays via the forward onion? That way there is no loss of precision, and nodes can still serve both privacy-valuing users and the ones that aim for the absolute fastest routes.

Well, I don't think this is a good idea as "privacy loves company" and having senders signal "hey, look at me, I want to be more private - I have something to hide" might actually have them stand out more overall.

The preferred outcome would really be that we agree on a reasonable delay threshold that we encourage for all forwarding nodes. If all just do a little, it might introduce sufficient noise to throw on-path and AS-level attackers off.

I'm def. +1 for 100ms buckets, which should be mostly sufficient to cover at least 1-2 RTTs of the handshake, hence introducing at least some degree of uncertainty on the attacker's side.

@joostjager
Collaborator Author

There are users who don't like privacy delays; they patch their nodes to remove the delay. And surely there will also be users who don't like coarse-grained hold times.

Do we really want to force this trade off onto them via the protocol without a way to opt-out, and rule out possibly undiscovered use cases that require ultra-fast payment settlement?

Picking a constant also doesn't look particularly attractive to me.

@t-bast
Collaborator

t-bast commented May 20, 2025

Do we really want to force this trade off onto them via the protocol without a way to opt-out, and rule out possibly undiscovered use cases that require ultra-fast payment settlement?

If you require ultra-fast settlement, there are always cases where multi-hop payments will fail you: shouldn't you just open a channel to your destination instead?

@joostjager
Collaborator Author

I still believe in a future of the lightning network where failures aren't tolerated at all, and all routing nodes deliver excellent service. Then it's just a matter of measuring latency and going for the sub-10 ms ones.

@joostjager
Collaborator Author

Hold time filtering to avoid users doing things we think they shouldn't. OP_RETURN vibes 😂

@t-bast
Collaborator

t-bast commented May 20, 2025

I still believe in a future of the lightning network where failures aren't tolerated at all, and all routing nodes deliver excellent service.

I agree, but that's orthogonal to payment latency? I honestly don't understand why it's important to aim for minimal latency: is it for performance's sake, to boast about numbers? A payment that takes 500ms to complete is almost indistinguishable from being instantaneous for our brains, so why does it matter that it takes 50ms instead of 500ms if you have to sacrifice privacy for it?

@joostjager
Collaborator Author

For me it is two-fold:

  • I don't want to rule out potential future use cases that somehow do require minimal latency.

  • It is unclear to me how much an arbitrary 100 ms granularity is going to help with privacy, both today and in a future in which we are stuck with this hard-coded value in the protocol. Yes, it probably helps, but it may not be substantial.

Furthermore, I do think an instant payment with no observable latency at all is just plain cool.

@GeorgeTsagk

Are we talking about introducing actual time delays here, or just bucketing the reported value of the hold time?

If we're talking about further delaying the actual forwarding of the HTLC, then I'm very much against it. Keep in mind this is done per payment attempt, so if I need to try 20 different routes this accumulates quickly, significantly hurting UX.

It seems the concern is not that accurate reporting of hold times directly ruins privacy for the nodes reporting them, but that it will eventually lead to intense competition around latency, contributing to further centralization of the network?

If we're talking about hold times leaking information about the routing nodes (fingerprinting?), then this signal is already too noisy to produce consistent results. Every node has different hardware and internet connectivity, so I'd expect entropy in the reports even among nodes running the same implementation (all LNDs or all LDKs), and there are better ways to fingerprint anyway.

I don't think we should pollute the protocol with this kind of safeguard/filter against more precise reports. If we assume that a user is willing to get their hands dirty to somehow signal lower hold times, they'll find a way. Also, let's not underestimate the power of implementation defaults.

By having implementations default to scoring all hold times <=100ms the same, or to reporting buckets of 100ms, most of the concerns seem to be eliminated. Of course, if we assume the majority of the network is running custom software, then what's even the point of this discussion?
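A 100ms reporting default like the one suggested here could look roughly as follows (a sketch; the PR's actual field encoding and units are not assumed): the node measures its hold time precisely but rounds it down to a bucket boundary before reporting, so all values within a bucket are indistinguishable to the sender.

```go
package main

import "fmt"

// bucketHoldTimeMs rounds a measured hold time down to the nearest
// bucket boundary, so all values within a bucket report identically.
func bucketHoldTimeMs(holdMs, bucketMs int64) int64 {
	return (holdMs / bucketMs) * bucketMs
}

func main() {
	for _, ms := range []int64{7, 99, 100, 237} {
		fmt.Printf("measured %dms -> reported %dms\n", ms, bucketHoldTimeMs(ms, 100))
	}
}
```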

thomash-acinq added a commit to ACINQ/eclair that referenced this pull request May 20, 2025
@DerEwige

I'm very much against the intentional slowing down of HTLC resolution.

Every millisecond an HTLC is unresolved, it poses a risk to the node runner.
One of the common causes of force closures is stuck HTLCs.

An HTLC gets stuck when a node goes offline after the HTLC passed through that node,
so the failure or success cannot propagate backwards through it.
The chance of an HTLC getting stuck is proportional to the time the HTLC needs to resolve.

So if you intentionally slow down HTLC resolution, you also increase the number of force closures.

I would consider the intentional hold of an HTLC for more than 10ms a jamming attack.

@t-bast
Collaborator

t-bast commented May 20, 2025

An HTLC gets stuck when a node goes offline after the HTLC passed through that node,
so the failure or success cannot propagate backwards through it.
The chance of an HTLC getting stuck is proportional to the time the HTLC needs to resolve.
So if you intentionally slow down HTLC resolution, you also increase the number of force closures.

That's not at all a valid conclusion. HTLCs get stuck and channels force-close because of bugs, not because nodes go temporarily offline or introduce batching/delays. That's completely unrelated.

I 100% agree with you that all bugs that lead to stuck HTLCs or force-closed channels must be fixed and should be the highest priority for all implementations. But that has absolutely nothing to do with whether or not we should introduce delays or measure relay latency in buckets of 100ms.

@DerEwige

That's not at all a valid conclusion. HTLCs get stuck and channels force-close because of bugs, not because nodes go temporarily offline or introduce batching/delays. That's completely unrelated.

I 100% agree with you that all bugs that lead to stuck HTLCs or force-closed channels must be fixed and should be the highest priority for all implementations. But that has absolutely nothing to do with whether or not we should introduce delays or measure relay latency in buckets of 100ms.

I do agree that it is usually a bug, or sometimes negligence, that is the root cause of most force-closes.

I also totally agree that there should be an easy way to measure relay latency.
Currently, the only way to measure relay latency is by aggregating a lot of data (100,000+ payments) and calculating the time for each channel from that data.

In fact, I do exactly this to generate the list you can find on the "node speeds" tab here:
https://docs.google.com/spreadsheets/d/1gn2TY3zDoPdk46yYQfRhGb2_YDepQEn7l3qtHWCxov4
I would love to have a simpler and more accurate way to do this, and 100ms accuracy in the reported time would also be good enough.

But saying that adding a relay delay would not have an impact on the number of force-closes is, in my opinion, wrong.

If a channel with an active HTLC goes offline and does not come back online before the HTLC reaches its timeout, that leads to a force-close.
The chance of a channel going offline with an active HTLC on it increases with the time the HTLC lingers on the channel.

I send out more than 100,000 HTLCs each day.
0.1s extra per hop would add hours of exposure each day.
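The exposure arithmetic behind that claim checks out as a back-of-the-envelope calculation (a sketch using only the figures from the comment above):

```go
package main

import "fmt"

// extraExposureHours computes the aggregate extra HTLC in-flight time
// per day if every forwarded HTLC is delayed by extraSec seconds.
func extraExposureHours(htlcsPerDay int, extraSec float64) float64 {
	return float64(htlcsPerDay) * extraSec / 3600
}

func main() {
	// 100,000 HTLCs/day with 0.1s extra delay each:
	fmt.Printf("%.1f extra HTLC-hours per day\n", extraExposureHours(100000, 0.1))
}
```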

@carlaKC
Contributor

carlaKC commented May 23, 2025

Wrote up some thoughts on delving that I think are worth considering when we think about latency measurements and privacy: https://delvingbitcoin.org/t/latency-and-privacy-in-lightning/1723.

@joostjager joostjager mentioned this pull request May 27, 2025
@joostjager
Collaborator Author

I did a rough implementation in LDK of hold times for the success case. It turned out to be straight-forward, mostly reusing code from the failure case. Test vectors in #1261

Contributor

@carlaKC carlaKC left a comment

Logic all looks good, but I would like to restructure the Returning Errors section to follow the requirements/rationale structure that we have elsewhere in the spec; as it stands, the two are interspersed.

It also needs to refer to `option_attributable_failures` rather than "attributable failures" in a few places.

09-features.md Outdated
@@ -46,6 +46,7 @@ The Context column decodes as follows:
| 26/27 | `option_shutdown_anysegwit` | Future segwit versions allowed in `shutdown` | IN | | [BOLT #2][bolt02-shutdown] |
| 28/29 | `option_dual_fund` | Use v2 of channel open, enables dual funding | IN | | [BOLT #2](02-peer-protocol.md) |
| 34/35 | `option_quiesce` | Support for `stfu` message | IN | | [BOLT #2][bolt02-quiescence] |
| 36/37 | `option_attributable_failure` | Can generate/relay attributable failures in `update_fail_htlc` | IN9 | | [BOLT #4][bolt04-attributable-failures] |
Contributor

Doesn't need to be in invoices (9)?

Collaborator Author

@joostjager joostjager Jun 3, 2025

It is indeed the question whether there are scenarios where senders would refuse a payment to a node that doesn't support attribution data. If the recipient returned random data and their predecessor added attribution data, blame would still land on the final node pair.

Maybe attribution data is unnecessary for the final hop? 🤔 Of course, they would still want to pass back something so as not to stand out as a recipient.

For hold times, they will probably always report zero anyway.

Contributor

We don't have any instructions in this PR for how to react to the presence / absence of this feature bit in an invoice, so I think we can take it out?

Seems easy enough to come back and add signaling in invoices (+ handling instructions) if we want it in the future?

@@ -579,6 +579,9 @@ should add a random delay before forwarding the error. Failures are likely to
be probing attempts and message timing may help the attacker infer its distance
to the final recipient.

Note that nodes on the blinded route return failures through `update_fail_malformed_htlc` and therefore do not and can
Contributor

nit: note that nodes on -> note that nodes in?

Collaborator Author

Fixed. Language skills insufficient for me to flag 'on the route' as incorrect.

Upon receiving a return packet, each hop generates its `ammag`, generates the
pseudo-random byte stream, and applies the result to the return packet before
return-forwarding it.
When supported, the erring node should also populate the `attribution_data` field in `update_fail_htlc` consisting of the following data:
Contributor

I think that we can more formally specify this:

  • if option_attributable_failure is advertised:
    • if path_key is not set in the incoming update_add_htlc:
      • MUST include htlc_hold_times in payload.

Collaborator Author

Added

Comment on lines 1063 to 1065
1. data:
* [`20*u32`:`htlc_hold_times`]
* [`210*sha256[..4]`:`truncated_hmacs`]
Contributor

Duplicated here and in BOLT02 - perhaps just link to the BOLT02 description?
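As a cross-check of the quoted field sizes: with 20 max hops, a 4-byte (u32) hold time per hop, and HMACs truncated to 4 bytes (`sha256[..4]`), a triangular layout yields exactly the 210 hmacs shown in the structure, for 920 bytes of attribution data in total. A sketch:

```go
package main

import "fmt"

// attributionDataLen returns the number of hmacs and the total byte
// size of the attribution data for the given parameters.
func attributionDataLen(maxHops, holdTimeLen, truncHMACLen int) (numHMACs, totalBytes int) {
	numHMACs = maxHops * (maxHops + 1) / 2
	totalBytes = maxHops*holdTimeLen + numHMACs*truncHMACLen
	return
}

func main() {
	n, total := attributionDataLen(20, 4, 4)
	fmt.Println(n, total) // 210 920
}
```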

In addition, each node locally stores data regarding its own sending peer in the
route, so it knows where to return-forward any eventual return packets.
The node generating the error message (_erring node_) builds a return packet

## Erring node
Contributor

General comment on this section: I'm missing the structure of requirements/rationale that we have elsewhere in the specification.

For example, the section for the erring node notes "The sender can use this information to score nodes on latency". To me this should be in the origin node section or all contained in a single rationale at the end. Ditto with the game theory of guessing HMACs.

Collaborator Author

I see what you mean. I've added a commit trying to address this and improve it a little, but I found it remains difficult to capture this in requirements while still providing explanation.

@joostjager joostjager changed the title Attributable failures (feature 36/37) Attributable data (feature 36/37) Jun 5, 2025
@joostjager joostjager changed the title Attributable data (feature 36/37) Attribution data (feature 36/37) Jun 5, 2025